373 research outputs found

    Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of its arguments that differ in accuracy and computational work. We propose and analyze a generalized version of Smolyak's algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension arising in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and to optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.
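
    For reference, a generic Smolyak-type sparse formula for a bilinear map $M$ (the simplest multilinear case; the notation is schematic and not necessarily that of the paper) combines hierarchical differences of approximations $u_i$ and $v_j$ of the two arguments:

        \mathcal{S}_L[M] \;:=\; \sum_{i + j \le L} M(\Delta_i u, \Delta_j v),
        \qquad \Delta_i u := u_i - u_{i-1}, \quad u_{-1} := 0 \ (\text{and analogously for } v),

    so that, compared with the full tensor sum over all $i, j \le L$, only the index pairs on a simplex are retained, which is what mitigates the curse of dimension when the number of arguments grows.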

    Wavelet-Fourier CORSING techniques for multi-dimensional advection-diffusion-reaction equations

    We present and analyze a novel wavelet-Fourier technique for the numerical treatment of multi-dimensional advection-diffusion-reaction equations, based on the CORSING (COmpRessed SolvING) paradigm. Combining the Petrov-Galerkin technique with the compressed sensing approach, the proposed method is able to approximate the largest coefficients of the solution with respect to a biorthogonal wavelet basis. Specifically, we assemble a compressed discretization based on randomized subsampling of the Fourier test space and employ sparse recovery techniques to approximate the solution to the PDE. In this paper, we provide the first rigorous recovery error bounds and effective recipes for the implementation of the CORSING technique in the multi-dimensional setting. Our theoretical analysis relies on new estimates for the local a-coherence, which measures the interference between wavelet and Fourier basis functions with respect to the metric induced by the PDE operator. The stability and robustness of the proposed scheme are shown by numerical illustrations in the one-, two-, and three-dimensional cases.
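
    To make the subsample-and-recover idea concrete, below is a minimal, generic compressed-sensing sketch in Python; a random matrix stands in for the subsampled Petrov-Galerkin system, and a plain orthogonal matching pursuit plays the role of the sparse recovery step. It is an illustration of the paradigm only, not the CORSING discretization or the authors' code.

        import numpy as np

        def omp(A, b, s):
            """Greedy orthogonal matching pursuit: select s columns of A to fit b."""
            idx, coef, r = [], np.zeros(0), b.copy()
            for _ in range(s):
                idx.append(int(np.argmax(np.abs(A.T @ r))))       # most correlated column
                coef, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
                r = b - A[:, idx] @ coef                          # update residual
            x = np.zeros(A.shape[1])
            x[idx] = coef
            return x

        rng = np.random.default_rng(0)
        N, m, s = 256, 64, 5                           # trial-space size, sampled test functions, sparsity
        B = rng.standard_normal((N, N)) / np.sqrt(N)   # stand-in for the full Petrov-Galerkin matrix
        u_true = np.zeros(N)
        u_true[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)  # s-sparse coefficients
        rows = rng.choice(N, size=m, replace=False)    # randomized subsampling of the test space
        A_sub, b_sub = B[rows], B[rows] @ u_true       # compressed discretization and load vector
        u_hat = omp(A_sub, b_sub, s)                   # sparse recovery of the largest coefficients
        print("recovery error:", np.linalg.norm(u_hat - u_true))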

    Multi-Index Monte Carlo: When Sparsity Meets Sampling

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, in MIMC we use high-order mixed differences instead of the first-order differences used in MLMC, which dramatically reduces the variance of the hierarchical differences. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis and which enlarge the domain of problem parameters for which we achieve the optimal convergence rate, $\mathcal{O}(\text{TOL}^{-2})$. Moreover, in MIMC, the rate of increase of required memory with respect to $\text{TOL}$ is independent of the number of directions up to a logarithmic term, which allows far more accurate solutions to be computed in higher dimensions than is possible with MLMC. We motivate the setting of MIMC by first focusing on a simple full tensor index set. We then propose a systematic construction of optimal sets of indices for MIMC based on properly defined profits, which in turn depend on the average cost per sample and the corresponding weak error and variance. Under standard assumptions on the convergence rates of the weak error, variance, and work per sample, the optimal index set turns out to be of total degree (TD) type. In some cases, using optimal index sets, MIMC achieves a better rate of computational complexity than the corresponding rate when using full tensor index sets.
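
    In the notation commonly used for multi-index methods (a schematic of the estimator described above, not a verbatim quotation of the paper), the MIMC estimator over an index set $\mathcal{I} \subset \mathbb{N}^d$ averages first-order mixed differences taken in every direction:

        \mathcal{A}_{\mathrm{MIMC}} \;=\; \sum_{\boldsymbol{\alpha} \in \mathcal{I}} \frac{1}{M_{\boldsymbol{\alpha}}} \sum_{m=1}^{M_{\boldsymbol{\alpha}}} \Delta F_{\boldsymbol{\alpha}}(\omega_{\boldsymbol{\alpha},m}),
        \qquad \Delta F_{\boldsymbol{\alpha}} := \Bigl(\prod_{i=1}^{d} \Delta_i\Bigr) F_{\boldsymbol{\alpha}},
        \quad \Delta_i F_{\boldsymbol{\alpha}} := F_{\boldsymbol{\alpha}} - F_{\boldsymbol{\alpha} - e_i},

    with the convention $F_{\boldsymbol{\beta}} := 0$ whenever some $\beta_i < 0$; here $F_{\boldsymbol{\alpha}}$ is the quantity of interest computed with discretization level $\alpha_i$ in direction $i$, and $M_{\boldsymbol{\alpha}}$ independent samples $\omega_{\boldsymbol{\alpha},m}$ are used per multi-index. The total-degree index sets mentioned above take the form $\mathcal{I} = \{\boldsymbol{\alpha} : \sum_i w_i \alpha_i \le L\}$ for suitable weights $w_i$.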

    Dynamically Orthogonal Approximation for Stochastic Differential Equations

    In this paper, we set the mathematical foundations of the Dynamical Low Rank Approximation (DLRA) method for high-dimensional stochastic differential equations. DLRA aims at approximating the solution as a linear combination of a small number of basis vectors with random coefficients (low-rank format), with the peculiarity that both the basis vectors and the random coefficients vary in time. While the formulation and properties of DLRA are now well understood for random/parametric equations, the same cannot be said for SDEs, and this work aims to fill this gap. We start by rigorously formulating a Dynamically Orthogonal (DO) approximation (an instance of DLRA successfully used in applications) for SDEs, which we then generalize to define a parametrization-independent DLRA for SDEs. We show local well-posedness of the DO equations and their equivalence with the DLRA formulation. We also characterize the explosion time of the DO solution by a loss of linear independence of the random coefficients defining the solution expansion and give sufficient conditions for global existence.
    Comment: 32 pages
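
    For orientation, a common way to write the DO ansatz (schematic notation; conventions, e.g. the treatment of the mean, vary across the literature) is

        X_t \;\approx\; \bar{X}(t) + \sum_{i=1}^{R} U_i(t)\, Y_i(t),
        \qquad \langle U_i(t), U_j(t) \rangle = \delta_{ij},
        \qquad \langle \partial_t U_i(t), U_j(t) \rangle = 0,

    where the $U_i$ form a time-dependent deterministic orthonormal basis, the $Y_i$ are zero-mean random coefficients, and the last (gauge) condition removes the redundancy of the low-rank parametrization; loosely, the loss of linear independence of the $Y_i$ mentioned above corresponds to their covariance matrix becoming singular.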

    Partitioned Algorithms for Fluid-Structure Interaction Problems in Haemodynamics

    We consider the fluid-structure interaction (FSI) problem arising in haemodynamic applications. The finite elasticity equations for the vessel are written in Lagrangian form, while the Navier-Stokes equations for the blood are written in Arbitrary Lagrangian Eulerian form. The resulting three-field problem (fluid / structure / fluid domain) is formalized via the introduction of three Lagrange multipliers and consistently discretized by p-th order backward differentiation formulae (BDFp). We focus on partitioned algorithms for its numerical solution, which consist in the successive solution of the three subproblems. We review several strategies, all relying on the exchange of Robin interface conditions, and discuss their performance as recently reported in the literature. We also analyze the stability of explicit partitioned procedures and the convergence of iterative implicit partitioned procedures on a simple linear FSI problem for a general BDFp temporal discretization.
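
    As a schematic example of the interface data exchanged by such partitioned schemes (a generic Robin-Robin coupling, not necessarily the exact conditions analyzed in the paper), the fluid and structure subproblems are closed with

        \sigma_f(\mathbf{u}, p)\,\mathbf{n} + \alpha_f\, \mathbf{u} \;=\; \sigma_s(\mathbf{d})\,\mathbf{n} + \alpha_f\, \dot{\mathbf{d}} \qquad \text{(fluid side)},
        \sigma_s(\mathbf{d})\,\mathbf{n} + \alpha_s\, \dot{\mathbf{d}} \;=\; \sigma_f(\mathbf{u}, p)\,\mathbf{n} + \alpha_s\, \mathbf{u} \qquad \text{(structure side)},

    imposed on the fluid-structure interface, where $\sigma_f$ and $\sigma_s$ are the fluid and structure stresses, $\mathbf{n}$ the interface normal, $\mathbf{u}$ and $p$ the fluid velocity and pressure, $\mathbf{d}$ the structure displacement, and $\alpha_f, \alpha_s > 0$ the Robin parameters; at each (sub)iteration, every subproblem is solved using the other field's most recent interface data on the right-hand side.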

    Gradient-based optimisation of the conditional-value-at-risk using the multi-level Monte Carlo method

    In this work, we tackle the problem of minimising the Conditional-Value-at-Risk (CVaR) of output quantities of complex differential models with random input data, using gradient-based approaches in combination with the Multi-Level Monte Carlo (MLMC) method. In particular, we consider the framework of multi-level Monte Carlo for parametric expectations and propose modifications of the MLMC estimator, the error estimation procedure, and the adaptive MLMC parameter selection to ensure the estimation of the CVaR and of the sensitivities for a given design with a prescribed accuracy. We then propose combining the MLMC framework with an alternating inexact minimisation-gradient descent algorithm, for which we prove exponential convergence in the optimisation iterations under the assumptions of strong convexity and Lipschitz continuity of the gradient of the objective function. We demonstrate the performance of our approach on two numerical examples of practical relevance, which exhibit the same optimal asymptotic cost-tolerance behaviour as standard MLMC methods for fixed-design computations of output expectations.
    Comment: 26 pages, 18 figures, 1 table. Related to arXiv:2208.07252. Data available at https://zenodo.org/record/719344
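
    For context, the CVaR at confidence level $\beta \in (0,1)$ of a random output $Q$ admits the Rockafellar-Uryasev formulation (generic notation; presumably the link exploited by the parametric-expectation framework mentioned above):

        \mathrm{CVaR}_{\beta}[Q] \;=\; \min_{t \in \mathbb{R}} \Bigl\{ t + \frac{1}{1-\beta}\, \mathbb{E}\bigl[(Q - t)^{+}\bigr] \Bigr\},

    so that estimating the CVaR, and its sensitivities with respect to the design variables, reduces to approximating the parametric expectation $t \mapsto \mathbb{E}[(Q - t)^{+}]$ and its minimizer, which is the kind of object an MLMC estimator for parametric expectations is built to handle.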